[P/D] Add cpu device support for nixl_connector #27510
base: main
Conversation
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default; only a small, essential subset of tests runs automatically. You can ask your reviewers to trigger select CI tests on top of that. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can add the ready label to the PR. If you have any questions, please reach out to us on Slack at https://slack.vllm.ai. 🚀
Code Review
This pull request adds support for using the nixl_connector in CPU-only environments by including 'cpu' as a supported device in _NIXL_SUPPORTED_DEVICE. The change is straightforward and correctly enables the NixlConnectorWorker to initialize on a CPU platform with a CPU-based KV buffer, which is necessary for features like P/D disaggregation in CPU-only setups. The implementation aligns with the intended purpose; I have no issues to report.
njhill
left a comment
Thanks @ZhengHongming888
@ZhengHongming888 could you please sign off your commit per the DCO instructions: https://github.com/vllm-project/vllm/pull/27510/checks?check_run_id=53663014635
💡 Codex Review
Here are some automated review suggestions for this pull request.
```python
    ),
    "tpu": ("cpu",),
    "xpu": ("cpu",),
    "cpu": ("cpu",),
}
```
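For context, a tentative reconstruction of the full mapping after this change; only the tpu/xpu/cpu lines appear in the hunk above, so the cuda entry is an assumption rather than something shown in the diff:

```python
# Reconstruction for context; the "cuda" entry is assumed, not shown above.
_NIXL_SUPPORTED_DEVICE = {
    "cuda": ("cuda", "cpu"),
    "tpu": ("cpu",),
    "xpu": ("cpu",),
    "cpu": ("cpu",),  # added by this PR
}
```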
Enabling CPU device hits unimplemented buffer copy APIs
Adding "cpu": ("cpu",) makes the Nixl connector accept CPU hosts, but the CPU platform still lacks the copy primitives that copy_kv_blocks() uses when kv_buffer_device == "cpu". After a transfer completes, get_finished() calls sync_recved_kv_to_device/save_kv_to_host, which invoke copy_kv_blocks. That function delegates to current_platform.insert_blocks_to_device or swap_out_blocks_to_host (see kv_connector/utils.py), neither of which exist on CpuPlatform. As soon as a request is received or saved, this path will raise an AttributeError, so a CPU-only deployment cannot actually run. Either avoid host-buffer copies for CPU or implement the required methods in CpuPlatform before advertising CPU support here.
@ZhengHongming888 this is a good point. Will this change actually have any use without other changes?
@njhill yes, agreed, this is a good point. The previous version of this PR only made the nixl connector accept a CPU host and set self.use_host_buffer = True whenever kv_buffer_device == "cpu".
I have now updated the PR to also check whether the current device is CPU, as below:
```python
if current_platform.device_type == "cpu":
    self.use_host_buffer = False
else:
    self.use_host_buffer = vllm_config.kv_transfer_config.kv_buffer_device == "cpu"
```
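(Presumably this works because on a pure-CPU platform the KV cache already lives in host memory and can be registered with NIXL directly, so no separate host staging buffer, and hence none of the sync_recved_kv_to_device/save_kv_to_host copy paths, is needed.)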
This sets self.use_host_buffer = False, so all other nixl and P/D behavior stays the same as on GPUs. You can see the KV transfer happen in /workspace/vllm/tests/v1/kv_connector/nixl_integration/run_accuracy_test.sh (P/D with toy_proxy_server.py); the output log was attached as a screenshot.
@njhill please check the latest update for CPU P/D disaggregation again. Thanks
Signed-off-by: Zheng, Hongming <[email protected]>
Force-pushed 84c2f40 to ecb99d7
Done, thanks @njhill
```python
# cpu kv buffer for xfer
# used when device memory can not be registered under nixl
self.host_xfer_buffers: dict[str, torch.Tensor] = {}
self.use_host_buffer = self.kv_buffer_device == "cpu"
```
If use_host_buffer is True, it will call sync_recved_kv_to_device, which is expected to throw an error in this case.
```python
if self.use_host_buffer:
    self.sync_recved_kv_to_device(req_id, meta)
```
Yes, it's updated now. Thanks.
Hi @njhill, thanks for reviewing the PR. We have updated it and verified it locally in our CPU environment with an accuracy test.
Purpose
This PR adds the 'cpu' option for P/D disaggregation / KV transfer via nixl_connector on a pure-CPU machine or environment.
With this option added, you can run vllm serve on a pure-CPU machine or docker image as below:

```bash
VLLM_SKIP_WARMUP=True vllm serve meta-llama/Llama-3.2-3B-Instruct --port 8200 \
    --gpu-memory-utilization 0.9 --enforce-eager --tensor-parallel-size 1 \
    --max_model_len 4096 \
    --kv-transfer-config '{"kv_connector":"NixlConnector","kv_role":"kv_both","kv_buffer_device":"cpu"}'
```

This can help build heterogeneous serving systems that mix CPU and GPU nodes, e.g. under the llm-d framework.
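A full P/D disaggregated setup would typically launch separate prefill and decode instances and route requests between them through a proxy (the PR's accuracy test uses toy_proxy_server.py for this). A hypothetical two-instance launch on one CPU host; the ports and role split here are illustrative, not taken from the PR:

```bash
# Prefill (KV producer) instance; port is illustrative.
VLLM_SKIP_WARMUP=True vllm serve meta-llama/Llama-3.2-3B-Instruct --port 8100 \
    --enforce-eager --max_model_len 4096 \
    --kv-transfer-config '{"kv_connector":"NixlConnector","kv_role":"kv_producer","kv_buffer_device":"cpu"}' &

# Decode (KV consumer) instance.
VLLM_SKIP_WARMUP=True vllm serve meta-llama/Llama-3.2-3B-Instruct --port 8201 \
    --enforce-eager --max_model_len 4096 \
    --kv-transfer-config '{"kv_connector":"NixlConnector","kv_role":"kv_consumer","kv_buffer_device":"cpu"}' &
```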
Thanks.
Test Plan
Test Result
Essential Elements of an Effective PR Description Checklist
supported_models.md and examples for a new model.